1. 14 Apr 2014 (2 commits)
    • xfs: write failure beyond EOF truncates too much data · 72ab70a1
      Authored by Dave Chinner
      If we fail a write beyond EOF and have to handle it in
      xfs_vm_write_begin(), we truncate the inode back to the current inode
      size. This doesn't take into account the fact that we may have
      already made successful writes to the same page (in the case of block
      size < page size) and hence we can truncate the page cache away from
      blocks with valid data in them. If these blocks are delayed
      allocation blocks, we now have a mismatch between the page cache and
      the extent tree, and this will trigger - at minimum - a delayed
      block count mismatch assert when the inode is evicted from the cache.
      We can also trip over it when block mapping for direct IO - this is
      the most common symptom seen from fsx and fsstress when run from
      xfstests.
      
      Fix it by only truncating away the exact range we are updating state
      for in this write_begin call.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Tested-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
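
      A minimal sketch of the idea (illustrative only, not the exact patch; it
      assumes pos/len describe the range this write_begin call was updating):

          #include <linux/fs.h>
          #include <linux/mm.h>

          /* Failed ->write_begin() beyond EOF: only toss the page cache over
           * the range of this write, so blocks written earlier to the same
           * page keep their (delalloc) state. */
          static void example_write_begin_failed(struct inode *inode, loff_t pos,
                                                 unsigned int len)
          {
                  if (pos + len > i_size_read(inode))
                          truncate_pagecache_range(inode, pos, pos + len - 1);
          }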
    • xfs: kill buffers over failed write ranges properly · 4ab9ed57
      Authored by Dave Chinner
      When a write fails, if we don't clear the delalloc flags from the
      buffers over the failed range, they can persist beyond EOF and cause
      problems. writeback will see the pages in the page cache, see they
      are dirty and continually retry the write, assuming that the page
      beyond EOF is just racing with a truncate. The page will eventually
      be released due to some other operation (e.g. direct IO), and it
      will not pass through invalidation because it is dirty. Hence it
      will be released with buffer_delay set on it, and trigger warnings
      in xfs_vm_releasepage() and an assert failure in xfs_file_aio_write_direct,
      because invalidation failed and we didn't write the correct amount.
      
      This causes failures on block size < page size filesystems in fsx
      and fsstress workloads run by xfstests.
      
      Fix it by completely trashing any state on the buffer that could be
      used to imply that it contains valid data when the delalloc range
      over the buffer is punched out during the failed write handling.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Tested-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
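
      A hedged sketch of "trashing" the buffer state while the delalloc range is
      punched out (the exact set of flags cleared by the patch may differ):

          #include <linux/buffer_head.h>

          /* Strip anything that could later imply this buffer holds valid,
           * mapped or dirty data. */
          static void example_kill_buffer_state(struct buffer_head *bh)
          {
                  clear_buffer_delay(bh);
                  clear_buffer_uptodate(bh);
                  clear_buffer_mapped(bh);
                  clear_buffer_new(bh);
                  clear_buffer_dirty(bh);
          }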
  2. 04 Apr 2014 (1 commit)
  3. 07 Mar 2014 (1 commit)
    • xfs: xfs_check_page_type buffer checks need help · a49935f2
      Authored by Dave Chinner
      xfs_aops_discard_page() was introduced in the following commit:
      
        xfs: truncate delalloc extents when IO fails in writeback
      
      ... to clean up left over delalloc ranges after I/O failure in
      ->writepage(). generic/224 tests for this scenario and occasionally
      reproduces panics on sub-4k blocksize filesystems.
      
      The cause of this is failure to clean up the delalloc range on a
      page where the first buffer does not match one of the expected
      states of xfs_check_page_type(). If a buffer is not unwritten,
      delayed or dirty&mapped, xfs_check_page_type() stops and
      immediately returns 0.
      
      The stress test of generic/224 creates a scenario where the first
      several buffers of a page with delayed buffers are mapped & uptodate
      and some subsequent buffer is delayed. If the ->writepage() happens
      to fail for this page, xfs_aops_discard_page() incorrectly skips
      the entire page.
      
      This then causes later failures either when direct IO maps the range
      and finds the stale delayed buffer, or we evict the inode and find
      that the inode still has a delayed block reservation accounted to
      it.
      
      We can easily fix this xfs_aops_discard_page() failure by making
      xfs_check_page_type() check all buffers, but this breaks
      xfs_convert_page() more than it is already broken. Indeed,
      xfs_convert_page() wants xfs_check_page_type() to tell it if the
      first buffers on the pages are of a type that can be aggregated into
      the contiguous IO that is already being built.
      
      xfs_convert_page() should not be writing random buffers out of a
      page, but the current behaviour will cause it to do so if there are
      buffers that don't match the current specification on the page.
      Hence for xfs_convert_page() we need to:
      
      	a) return "not ok" if the first buffer on the page does not
      	match the specification provided, so we don't write anything;
      	and
      	b) abort its buffer-add-to-io loop the moment we come
      	across a buffer that does not match the specification.
      
      Hence we need to fix both xfs_check_page_type() and
      xfs_convert_page() to work correctly with pages that have mixed
      buffer types, whilst allowing xfs_aops_discard_page() to scan all
      buffers on the page for a type match.
      Reported-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
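
      A hedged sketch of the page/type check with the new "scan all buffers"
      mode (the real xfs_check_page_type() signature and per-type matching
      differ in detail; buffer_matches_type() below is a hypothetical stand-in):

          #include <linux/buffer_head.h>

          /* Hypothetical stand-in for the real per-type checks (unwritten,
           * delayed, or dirty+mapped, depending on 'type'). */
          static bool buffer_matches_type(struct buffer_head *bh, unsigned int type)
          {
                  (void)type;     /* simplified: real code matches one specific type */
                  return buffer_unwritten(bh) || buffer_delay(bh) ||
                         (buffer_dirty(bh) && buffer_mapped(bh));
          }

          /* Discard path (check_all_buffers == true): any matching buffer on the
           * page is enough. Convert path (false): bail on the first mismatch. */
          static bool example_check_page_type(struct page *page, unsigned int type,
                                              bool check_all_buffers)
          {
                  struct buffer_head *bh, *head;

                  if (!page_has_buffers(page))
                          return false;

                  bh = head = page_buffers(page);
                  do {
                          if (buffer_matches_type(bh, type))
                                  return true;
                          if (!check_all_buffers)
                                  return false;
                  } while ((bh = bh->b_this_page) != head);

                  return false;
          }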
  4. 10 Feb 2014 (1 commit)
  5. 19 Dec 2013 (1 commit)
  6. 24 Nov 2013 (1 commit)
    • block: Abstract out bvec iterator · 4f024f37
      Authored by Kent Overstreet
      Immutable biovecs are going to require an explicit iterator. To
      implement immutable bvecs, a later patch is going to add a bi_bvec_done
      member to this struct; for now, this patch effectively just renames
      things.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: "Ed L. Cashin" <ecashin@coraid.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Lars Ellenberg <drbd-dev@lists.linbit.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Matthew Wilcox <willy@linux.intel.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Yehuda Sadeh <yehuda@inktank.com>
      Cc: Sage Weil <sage@inktank.com>
      Cc: Alex Elder <elder@inktank.com>
      Cc: ceph-devel@vger.kernel.org
      Cc: Joshua Morris <josh.h.morris@us.ibm.com>
      Cc: Philip Kelleher <pjk1939@linux.vnet.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: dm-devel@redhat.com
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: linux390@de.ibm.com
      Cc: Boaz Harrosh <bharrosh@panasas.com>
      Cc: Benny Halevy <bhalevy@tonian.com>
      Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Chris Mason <chris.mason@fusionio.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Andreas Dilger <adilger.kernel@dilger.ca>
      Cc: Jaegeuk Kim <jaegeuk.kim@samsung.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Dave Kleikamp <shaggy@kernel.org>
      Cc: Joern Engel <joern@logfs.org>
      Cc: Prasad Joshi <prasadjoshi.linux@gmail.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: KONISHI Ryusuke <konishi.ryusuke@lab.ntt.co.jp>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Ben Myers <bpm@sgi.com>
      Cc: xfs@oss.sgi.com
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Herton Ronaldo Krzesinski <herton.krzesinski@canonical.com>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Guo Chao <yan@linux.vnet.ibm.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Asai Thambi S P <asamymuthupa@micron.com>
      Cc: Selvan Mani <smani@micron.com>
      Cc: Sam Bradshaw <sbradshaw@micron.com>
      Cc: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
      Cc: "Roger Pau Monné" <roger.pau@citrix.com>
      Cc: Jan Beulich <jbeulich@suse.com>
      Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Cc: Ian Campbell <Ian.Campbell@citrix.com>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Jiang Liu <jiang.liu@huawei.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchand@redhat.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Peng Tao <tao.peng@emc.com>
      Cc: Andy Adamson <andros@netapp.com>
      Cc: fanchaoting <fanchaoting@cn.fujitsu.com>
      Cc: Jie Liu <jeff.liu@oracle.com>
      Cc: Sunil Mushran <sunil.mushran@gmail.com>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Namjae Jeon <namjae.jeon@samsung.com>
      Cc: Pankaj Kumar <pankaj.km@samsung.com>
      Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Mel Gorman <mgorman@suse.de>
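
      For orientation, the iterator looks roughly like this, and call sites move
      from bio->bi_sector to bio->bi_iter.bi_sector (bi_bvec_done is the member
      the message says a later patch adds):

          #include <linux/types.h>

          struct bvec_iter {
                  sector_t        bi_sector;      /* device address, in 512-byte sectors */
                  unsigned int    bi_size;        /* residual I/O count */
                  unsigned int    bi_idx;         /* current index into bi_io_vec */
                  unsigned int    bi_bvec_done;   /* bytes completed in the current bvec */
          };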
  7. 31 Oct 2013 (1 commit)
    • xfs: prevent stack overflows from page cache allocation · ad22c7a0
      Authored by Dave Chinner
      Page cache allocation doesn't always go through ->write_begin and
      hence we don't always get the opportunity to set the allocation
      context to GFP_NOFS. Failing to do this means we open up the direct
      reclaim stack to recurse into the filesystem and consume a
      significant amount of stack.
      
      On RHEL6.4 kernels we are seeing ra_submit() and
      generic_file_splice_read() from an nfsd context recursing into the
      filesystem via the inode cache shrinker and evicting inodes. This is
      causing truncation to be run (e.g. EOF block freeing) and causing
      bmap btree block merges and free space btree block splits to occur.
      These btree manipulations are occurring with the call chain already
      30 functions deep and hence there is not enough stack space to
      complete such operations.
      
      To avoid these specific overruns, we need to prevent the page cache
      allocation from recursing via direct reclaim. We can do that because
      the allocation functions take the allocation context from that which
      is stored in the mapping for the inode. We don't set that right now,
      so the default is GFP_HIGHUSER_MOVABLE, which is effectively a
      GFP_KERNEL context. We need it to be the equivalent of GFP_NOFS, so
      when we initialise an inode, set the mapping gfp mask appropriately.
      
      This makes the use of AOP_FLAG_NOFS redundant from other parts of
      the XFS IO path, so get rid of it.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ben Myers <bpm@sgi.com>
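
      A minimal sketch of the approach (illustrative; assumed to run where XFS
      sets up the VFS inode): clear __GFP_FS from the mapping's allocation mask
      so page cache allocations for this inode cannot recurse into the
      filesystem via direct reclaim.

          #include <linux/fs.h>
          #include <linux/gfp.h>
          #include <linux/pagemap.h>

          static void example_setup_inode_gfp(struct inode *inode)
          {
                  gfp_t mask = mapping_gfp_mask(inode->i_mapping);

                  /* GFP_NOFS-equivalent context for all page cache allocations */
                  mapping_set_gfp_mask(inode->i_mapping, mask & ~__GFP_FS);
          }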
  8. 24 Oct 2013 (3 commits)
    • xfs: decouple inode and bmap btree header files · a4fbe6ab
      Authored by Dave Chinner
      Currently the xfs_inode.h header has a dependency on the definition
      of the BMAP btree records as the inode fork includes an array of
      xfs_bmbt_rec_host_t objects in its definition.
      
      Move all the btree format definitions from xfs_btree.h,
      xfs_bmap_btree.h, xfs_alloc_btree.h and xfs_ialloc_btree.h to
      xfs_format.h to continue the process of centralising the on-disk
      format definitions. With this done, the xfs inode definitions are no
      longer dependent on btree header files.
      
      This enables a massive culling of unnecessary includes, with close to
      200 #include directives removed from the XFS kernel code base.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
    • xfs: decouple log and transaction headers · 239880ef
      Authored by Dave Chinner
      xfs_trans.h has a dependency on xfs_log.h for a couple of
      structures. Most code that does transactions doesn't need to know
      anything about the log, but this dependency means that they have to
      include xfs_log.h. Decouple the xfs_trans.h and xfs_log.h header
      files and clean up the includes to be in dependency order.
      
      In doing this, remove the direct include of xfs_trans_reserve.h from
      xfs_trans.h so that we remove the dependency between xfs_trans.h and
      xfs_mount.h. Hence the xfs_trans.h include can be moved to
      indicate the actual dependencies other header files have on it.
      
      Note that these are kernel only header files, so this does not
      translate to any userspace changes at all.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
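
      As an illustration of the dependency-ordered include convention the series
      moves towards (an indicative ordering based on later XFS source files, not
      a quote from this patch):

          #include "xfs.h"
          #include "xfs_format.h"
          #include "xfs_log_format.h"
          #include "xfs_trans_resv.h"
          #include "xfs_mount.h"
          #include "xfs_trans.h"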
    • xfs: create a shared header file for format-related information · 70a9883c
      Authored by Dave Chinner
      All of the buffer operations structures need to be exported
      for xfs_db, so move them all to a common location rather than
      spreading them all over the place. They are verifying the on-disk
      format, so while xfs_format.h might be a good place, it is not part
      of the on disk format.
      
      Hence we need to create a new header file in which we centralise these
      related definitions. Start by moving the buffer operations
      structures, and then also move all the other definitions that have
      crept into xfs_log_format.h and xfs_format.h as there was no other
      shared header file to put them in.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ben Myers <bpm@sgi.com>
  9. 02 Oct 2013 (1 commit)
  10. 13 Sep 2013 (1 commit)
  11. 04 Sep 2013 (1 commit)
    • direct-io: Implement generic deferred AIO completions · 7b7a8665
      Authored by Christoph Hellwig
      Add support to the core direct-io code to defer AIO completions to user
      context using a workqueue.  This replaces opencoded and less efficient
      code in XFS and ext4 (we save a memory allocation for each direct IO)
      and will be needed to properly support O_(D)SYNC for AIO.
      
      The communication between the filesystem and the direct I/O code requires
      a new buffer head flag, which is a bit ugly but not avoidable until the
      direct I/O code stops abusing the buffer_head structure for communicating
      with the filesystems.
      
      Currently this creates a per-superblock unbound workqueue for these
      completions, which is taken from an earlier patch by Jan Kara.  I'm
      not really convinced about this use and would prefer a "normal" global
      workqueue with a high concurrency limit, but this needs further discussion.
      
      JK: Fixed ext4 part, dynamic allocation of the workqueue.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
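
      The deferral pattern, sketched loosely (struct and function names here are
      illustrative; the per-superblock workqueue is assumed to be the
      sb->s_dio_done_wq this series introduces):

          #include <linux/fs.h>
          #include <linux/workqueue.h>

          struct example_dio {
                  struct inode            *inode;
                  struct work_struct      complete_work;
          };

          static void example_aio_complete_work(struct work_struct *work)
          {
                  struct example_dio *dio =
                          container_of(work, struct example_dio, complete_work);

                  /* run ->end_io / inode size updates in process context */
                  (void)dio;
          }

          static void example_defer_completion(struct example_dio *dio)
          {
                  INIT_WORK(&dio->complete_work, example_aio_complete_work);
                  queue_work(dio->inode->i_sb->s_dio_done_wq, &dio->complete_work);
          }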
  12. 21 Aug 2013 (1 commit)
  13. 13 Aug 2013 (3 commits)
    • xfs: refactor xfs_trans_reserve() interface · 3d3c8b52
      Authored by Jie Liu
      Now that the new xfs_trans_res structure has been introduced, the log
      reservation size, log count as well as log flags are pre-initialized
      at mount time.  So it's time to refine the xfs_trans_reserve() interface
      to be neater.
      
      Also, introduce a new helper M_RES() to return a pointer to the
      mp->m_resv structure to simplify the input.
      Signed-off-by: Jie Liu <jeff.liu@oracle.com>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
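
      Roughly, the refactored interface looks like this at a call site (a sketch;
      the reservation name tr_write and the trailing argument names are
      indicative):

          /* Helper returning the pre-initialised reservation table of a mount. */
          #define M_RES(mp)       (&(mp)->m_resv)

          /* A caller now hands xfs_trans_reserve() one xfs_trans_res instead of
           * a size, log count and flags picked out by hand, e.g.:
           *
           *      error = xfs_trans_reserve(tp, &M_RES(mp)->tr_write,
           *                                resblks, 0);
           */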
    • xfs: kill xfs_vnodeops.[ch] · c24b5dfa
      Authored by Dave Chinner
      Now we have xfs_inode.c for holding kernel-only XFS inode
      operations, move all the inode operations from xfs_vnodeops.c to
      this new file as it holds another set of kernel-only inode
      operations. The name of this file traces back to the days of Irix
      and its vnodes, which we don't have anymore.
      
      Essentially this move consolidates the inode locking functions
      and a bunch of XFS inode operations into the one file. Eventually
      the high level functions will be merged into the VFS interface
      functions in xfs_iops.c.
      
      This leaves only internal preallocation, EOF block manipulation and
      hole punching functions in vnodeops.c. Move these to xfs_bmap_util.c
      where we are already consolidating various in-kernel physical extent
      manipulation and querying functions.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
    • xfs: create xfs_bmap_util.[ch] · 68988114
      Authored by Dave Chinner
      There is a bunch of code in xfs_bmap.c that is kernel specific and
      not shared with userspace. To minimise the difference between the
      kernel and userspace code, shift this unshared code to
      xfs_bmap_util.c, and the declarations to xfs_bmap_util.h.
      
      The biggest issue here is xfs_bmap_finish() - userspace has its own
      definition of this function, and so we need to move it out of
      xfs_bmap.[ch]. This means several other files need to include
      xfs_bmap_util.h as well.
      
      It also introduces an interesting dance for the stack switching
      code in xfs_bmapi_allocate(). The stack switching/workqueue code is
      actually moved to xfs_bmap_util.c, so that userspace can simply use
      a #define in a header file to connect the dots without needing to
      know about the stack switch code at all.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
  14. 23 Jul 2013 (1 commit)
    • xfs: fix assertion failure in xfs_vm_write_failed() · 58e59854
      Authored by Jie Liu
      In xfs_vm_write_failed(), we evaluate the block_offset of pos with
      PAGE_MASK which is an unsigned long.  That is fine on 64-bit platforms
      regardless of whether the request pos is 32-bit or 64-bit.  However, on
      32-bit platforms the mask value is 0xfffff000, so (pos & PAGE_MASK)
      masks off the high 32 bits of a 64-bit pos.
      
      As a result, the evaluated block_offset is incorrect, which triggers
      the assertion ASSERT(block_offset + from == pos) and can potentially pass
      the wrong block to xfs_vm_kill_delalloc_range().
      
      In this case, we can get a kernel panic if CONFIG_XFS_DEBUG is enabled:
      
      XFS: Assertion failed: block_offset + from == pos, file: fs/xfs/xfs_aops.c, line: 1504
      
      ------------[ cut here ]------------
       kernel BUG at fs/xfs/xfs_message.c:100!
       invalid opcode: 0000 [#1] SMP
       ........
       Pid: 4057, comm: mkfs.xfs Tainted: G           O 3.9.0-rc2 #1
       EIP: 0060:[<f94a7e8b>] EFLAGS: 00010282 CPU: 0
       EIP is at assfail+0x2b/0x30 [xfs]
       EAX: 00000056 EBX: f6ef28a0 ECX: 00000007 EDX: f57d22a4
       ESI: 1c2fb000 EDI: 00000000 EBP: ea6b5d30 ESP: ea6b5d1c
       DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
       CR0: 8005003b CR2: 094f3ff4 CR3: 2bcb4000 CR4: 000006f0
       DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
       DR6: ffff0ff0 DR7: 00000400
       Process mkfs.xfs (pid: 4057, ti=ea6b4000 task=ea5799e0 task.ti=ea6b4000)
       Stack:
       00000000 f9525c48 f951fa80 f951f96b 000005e4 ea6b5d7c f9494b34 c19b0ea2
       00000066 f3d6c620 c19b0ea2 00000000 e9a91458 00001000 00000000 00000000
       00000000 c15c7e89 00000000 1c2fb000 00000000 00000000 1c2fb000 00000080
       Call Trace:
       [<f9494b34>] xfs_vm_write_failed+0x74/0x1b0 [xfs]
       [<c15c7e89>] ? printk+0x4d/0x4f
       [<f9494d7d>] xfs_vm_write_begin+0x10d/0x170 [xfs]
       [<c110a34c>] generic_file_buffered_write+0xdc/0x210
       [<f949b669>] xfs_file_buffered_aio_write+0xf9/0x190 [xfs]
       [<f949b7f3>] xfs_file_aio_write+0xf3/0x160 [xfs]
       [<c115e504>] do_sync_write+0x94/0xd0
       [<c115ed1f>] vfs_write+0x8f/0x160
       [<c115e470>] ? wait_on_retry_sync_kiocb+0x50/0x50
       [<c115f017>] sys_write+0x47/0x80
       [<c15d860d>] sysenter_do_call+0x12/0x28
       .............
       EIP: [<f94a7e8b>] assfail+0x2b/0x30 [xfs] SS:ESP 0068:ea6b5d1c
       ---[ end trace cdd9af4f4ecab42f ]---
       Kernel panic - not syncing: Fatal exception
      
      In order to avoid this, evaluate the block_offset of the start of the
      page using shifts rather than masking, which sidesteps the width
      mismatch (a sketch follows below).
      
      Thanks Dave Chinner for help finding and fixing this bug.
      Reported-by: Michael L. Semon <mlsemon35@gmail.com>
      Reviewed-by: Dave Chinner <david@fromorbit.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Jie Liu <jeff.liu@oracle.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
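
      A minimal sketch of the shift-based evaluation (using PAGE_SHIFT; the
      kernels of that era used the equivalent PAGE_CACHE_SHIFT name):

          #include <linux/mm.h>

          /* Start-of-page offset for 'pos' without letting a 32-bit PAGE_MASK
           * truncate a 64-bit file offset. */
          static loff_t example_block_offset(loff_t pos)
          {
                  return (pos >> PAGE_SHIFT) << PAGE_SHIFT;
          }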
  15. 25 May 2013 (1 commit)
    • xfs: fix sub-page blocksize data integrity writes · 480d7467
      Authored by Dave Chinner
      FSX on 512 byte block size filesystems has been failing for some
      time with corrupted data. The fault dates back to the change in
      the writeback data integrity algorithm that uses a mark-and-sweep
      approach to avoid data writeback livelocks.
      
      Unfortunately, a side effect of this mark-and-sweep approach is that
      each page will only be written once for a data integrity sync, and
      there is a condition in writeback in XFS where a page may require
      two writeback attempts to be fully written. As a result of the high
      level change, we now only get a partial page writeback during the
      integrity sync because the first pass through writeback clears the
      mark left on the page index to tell writeback that the page needs
      writeback....
      
      The cause is writing a partial page in the clustering code. This can
      happen when a mapping boundary falls in the middle of a page - we
      end up writing back the first part of the page that the mapping
      covers, but then never revisit the page to have the remainder mapped
      and written.
      
      The fix is simple - if the mapping boundary falls inside a page,
      then simply abort clustering without touching the page. This means
      that the next ->writepage entry that write_cache_pages() will make
      is the page we aborted on, and xfs_vm_writepage() will map all
      sections of the page correctly. This behaviour is also optimal for
      non-data integrity writes, as it results in contiguous sequential
      writeback of the file rather than missing small holes and having to
      write them as "random" writes in a future pass.
      
      With this fix, all the fsx tests in xfstests now pass on a 512 byte
      block size filesystem on a 4k page machine.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>

      (cherry picked from commit 49b137cb)
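
      The abort condition can be sketched as follows (illustrative; the real
      check lives in the page clustering code and works on that function's
      imap/offset variables):

          #include <linux/types.h>

          /* If the current mapping ends inside this page, don't write a partial
           * page here; leave it for the next ->writepage() call, which will map
           * every section of the page itself. */
          static bool example_abort_clustering(loff_t map_end, loff_t page_start,
                                               unsigned int page_len)
          {
                  return map_end < page_start + page_len;
          }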
  16. 22 May 2013 (2 commits)
    • xfs: use ->invalidatepage() length argument · 34097dfe
      Authored by Lukas Czerner
      The ->invalidatepage() aop now accepts the range to invalidate, so make
      use of it in xfs_vm_invalidatepage().
      Signed-off-by: Lukas Czerner <lczerner@redhat.com>
      Acked-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Cc: xfs@oss.sgi.com
    • mm: change invalidatepage prototype to accept length · d47992f8
      Authored by Lukas Czerner
      Currently there is no way to truncate a partial page where the end
      truncation point is not at the end of the page. This is because it was not
      needed and the functionality was enough for file system truncate
      operation to work properly. However more file systems now support punch
      hole feature and it can benefit from mm supporting truncating page just
      up to the certain point.
      
      Specifically, with this functionality truncate_inode_pages_range() can
      be changed so it supports truncating partial page at the end of the
      range (currently it will BUG_ON() if 'end' is not at the end of the
      page).
      
      This commit changes the invalidatepage() address space operation
      prototype to accept range to be invalidated and update all the instances
      for it.
      
      We also change the block_invalidatepage() in the same way and actually
      make a use of the new length argument implementing range invalidation.
      
      Actual file system implementations will follow, except for the file systems
      where the changes are really simple and should not change the behaviour
      in any way. An implementation of truncate_page_range(), which will be able
      to accept page-unaligned ranges, will follow as well.
      Signed-off-by: Lukas Czerner <lczerner@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Hugh Dickins <hughd@google.com>
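
      The prototype change, shown schematically (these match the mainline
      signatures after this series):

          #include <linux/mm_types.h>

          /* The aop now takes an explicit length, limiting invalidation to
           * [offset, offset + length) within the page: */
          void (*invalidatepage)(struct page *page, unsigned int offset,
                                 unsigned int length);

          /* block_invalidatepage() gains the same argument and honours it when
           * deciding which buffers to discard:
           *      void block_invalidatepage(struct page *page, unsigned int offset,
           *                                unsigned int length);
           */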
  17. 21 May 2013 (1 commit)
    • xfs: fix sub-page blocksize data integrity writes · 49b137cb
      Authored by Dave Chinner
      FSX on 512 byte block size filesystems has been failing for some
      time with corrupted data. The fault dates back to the change in
      the writeback data integrity algorithm that uses a mark-and-sweep
      approach to avoid data writeback livelocks.
      
      Unfortunately, a side effect of this mark-and-sweep approach is that
      each page will only be written once for a data integrity sync, and
      there is a condition in writeback in XFS where a page may require
      two writeback attempts to be fully written. As a result of the high
      level change, we now only get a partial page writeback during the
      integrity sync because the first pass through writeback clears the
      mark left on the page index to tell writeback that the page needs
      writeback....
      
      The cause is writing a partial page in the clustering code. This can
      happen when a mapping boundary falls in the middle of a page - we
      end up writing back the first part of the page that the mapping
      covers, but then never revisit the page to have the remainder mapped
      and written.
      
      The fix is simple - if the mapping boundary falls inside a page,
      then simply abort clustering without touching the page. This means
      that the next ->writepage entry that write_cache_pages() will make
      is the page we aborted on, and xfs_vm_writepage() will map all
      sections of the page correctly. This behaviour is also optimal for
      non-data integrity writes, as it results in contiguous sequential
      writeback of the file rather than missing small holes and having to
      write them as "random" writes in a future pass.
      
      With this fix, all the fsx tests in xfstests now pass on a 512 byte
      block size filesystem on a 4k page machine.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
  18. 08 May 2013 (1 commit)
  19. 23 Mar 2013 (1 commit)
    • xfs: Fix WARN_ON(delalloc) in xfs_vm_releasepage() · ff9a28f6
      Authored by Jan Kara
      When a dirty page is truncated from a file but reclaim gets to it before
      truncate_inode_pages(), we hit WARN_ON(delalloc) in
      xfs_vm_releasepage(). This is because reclaim tries to write the page,
      xfs_vm_writepage() just bails out (leaving the page clean) and thus reclaim
      thinks it can continue and calls xfs_vm_releasepage() on a page with dirty
      buffers.
      
      Fix the issue by redirtying the page in xfs_vm_writepage(). This makes
      reclaim stop reclaiming the page and, logically, it also keeps the page in a
      more consistent state where a page with dirty buffers has PageDirty set.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
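
      The bail-out path can be sketched like this (illustrative; the real change
      sits in xfs_vm_writepage()'s "skip this page" handling):

          #include <linux/mm.h>
          #include <linux/pagemap.h>
          #include <linux/writeback.h>

          /* Instead of leaving a skipped page clean, mark it dirty again so
           * reclaim stops assuming the write happened. */
          static int example_writepage_skip(struct page *page,
                                            struct writeback_control *wbc)
          {
                  redirty_page_for_writepage(wbc, page);
                  unlock_page(page);
                  return 0;
          }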
  20. 29 Jan 2013 (1 commit)
  21. 26 Jan 2013 (1 commit)
  22. 30 Nov 2012 (1 commit)
    • xfs: fix direct IO nested transaction deadlock. · 437a255a
      Authored by Dave Chinner
      The direct IO path can do a nested transaction reservation when
      writing past the EOF. The first transaction is the append
      transaction for setting the filesize at IO completion, but we may
      also need a transaction for allocation of blocks. If the log is low
      on space due to reservations and a small log, the append transaction
      can be granted after waiting for space as the only active transaction
      in the system. This then attempts a reservation for an allocation,
      which there isn't space in the log for, and the reservation sleeps.
      The result is that there is nothing left in the system to wake up
      all the processes waiting for log space to come free.
      
      The stack trace that shows this deadlock is relatively innocuous:
      
       xlog_grant_head_wait
       xlog_grant_head_check
       xfs_log_reserve
       xfs_trans_reserve
       xfs_iomap_write_direct
       __xfs_get_blocks
       xfs_get_blocks_direct
       do_blockdev_direct_IO
       __blockdev_direct_IO
       xfs_vm_direct_IO
       generic_file_direct_write
       xfs_file_dio_aio_writ
       xfs_file_aio_write
       do_sync_write
       vfs_write
      
      This was discovered on a filesystem with a log of only 10MB, and a
      log stripe unit of 256k which increased the base reservations by
      512k. Hence an allocation transaction requires 1.2MB of log space to
      be available instead of only 260k, and so greatly increased the
      chance that there wouldn't be enough log space available for the
      nested transaction to succeed. The key to reproducing it is this
      mkfs command:
      
      mkfs.xfs -f -d agcount=16,su=256k,sw=12 -l su=256k,size=2560b $SCRATCH_DEV
      
      The test case was 1000 fsstress processes running with random
      freezes and unfreezes every few seconds. Thanks to Eryu Guan
      (eguan@redhat.com) for writing the test that found this on a system
      with a somewhat unique default configuration....
      
      cc: <stable@vger.kernel.org>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Andrew Dahl <adahl@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
  23. 17 Nov 2012 (1 commit)
    • xfs: fix broken error handling in xfs_vm_writepage · 3daed8bc
      Authored by Dave Chinner
      When we shut down the filesystem, it might first be detected in
      writeback when we are allocating an inode size transaction. This
      happens after we have moved all the pages into the writeback state
      and unlocked them. Unfortunately, if we fail to set up the
      transaction we then abort writeback and try to invalidate the
      current page. This then triggers a BUG() in block_invalidatepage()
      because we are trying to invalidate an unlocked page.
      
      Fixing this is a bit of a chicken and egg problem - we can't
      allocate the transaction until we've clustered all the pages into
      the IO and we know the size of it (i.e. whether the last block of
      the IO is beyond the current EOF or not). However, we don't want to
      hold pages locked for long periods of time, especially while we lock
      other pages to cluster them into the write.
      
      To fix this, we need to make a clear delineation in writeback where
      errors can only be handled by IO completion processing. That is,
      once we have marked a page for writeback and unlocked it, we have to
      report errors via IO completion because we've already started the
      IO. We may not have submitted any IO, but we've changed the page
      state to indicate that it is under IO so we must now use the IO
      completion path to report errors.
      
      To do this, add an error field to xfs_submit_ioend() to pass it the
      error that occurred during the building of the ioend chain. When
      this is non-zero, mark each ioend with the error and call
      xfs_finish_ioend() directly rather than building bios. This will
      immediately push the ioends through completion processing with the
      error that has occurred.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
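
      A hedged sketch of the idea (simplified; the types and finish helper mirror
      the XFS ones loosely): when chain building failed, stamp every ioend with
      the error and push it straight into completion processing instead of
      issuing I/O.

          struct example_ioend {
                  int                     io_error;
                  struct example_ioend    *io_list;       /* next ioend in the chain */
          };

          /* stand-in for xfs_finish_ioend(): run IO completion processing */
          static void example_finish_ioend(struct example_ioend *ioend)
          {
                  (void)ioend;
          }

          static void example_submit_ioend(struct example_ioend *head, int fail)
          {
                  struct example_ioend *ioend;

                  for (ioend = head; ioend; ioend = ioend->io_list) {
                          if (fail) {
                                  /* report via IO completion, never by invalidating
                                   * pages already marked for writeback */
                                  ioend->io_error = fail;
                                  example_finish_ioend(ioend);
                                  continue;
                          }
                          /* otherwise build and submit bios for this ioend ... */
                  }
          }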
  24. 15 Nov 2012 (1 commit)
    • xfs: remove xfs_flush_pages · 4bc1ea6b
      Authored by Dave Chinner
      It is a complex wrapper around VFS functions, but there are VFS
      functions that provide exactly the same functionality. Call the VFS
      functions directly and remove the unnecessary indirection and
      complexity.
      
      We don't need to care about clearing the XFS_ITRUNCATED flag, as
      that is done during .writepages. Hence it is cleared by the VFS
      writeback path if there is anything to write back during the flush.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Andrew Dahl <adahl@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
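
      For orientation, the kind of direct VFS call that replaces the wrapper at
      its call sites (a sketch; the exact helper chosen per call site may differ):

          #include <linux/fs.h>

          /* Write back and wait on a byte range of an inode's data using the
           * generic VFS helper instead of an XFS-private wrapper. */
          static int example_flush_range(struct inode *inode, loff_t first, loff_t last)
          {
                  return filemap_write_and_wait_range(inode->i_mapping, first, last);
          }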
  25. 14 Nov 2012 (1 commit)
    • xfs: fix broken error handling in xfs_vm_writepage · 7bf7f352
      Authored by Dave Chinner
      When we shut down the filesystem, it might first be detected in
      writeback when we are allocating an inode size transaction. This
      happens after we have moved all the pages into the writeback state
      and unlocked them. Unfortunately, if we fail to set up the
      transaction we then abort writeback and try to invalidate the
      current page. This then triggers a BUG() in block_invalidatepage()
      because we are trying to invalidate an unlocked page.
      
      Fixing this is a bit of a chicken and egg problem - we can't
      allocate the transaction until we've clustered all the pages into
      the IO and we know the size of it (i.e. whether the last block of
      the IO is beyond the current EOF or not). However, we don't want to
      hold pages locked for long periods of time, especially while we lock
      other pages to cluster them into the write.
      
      To fix this, we need to make a clear delineation in writeback where
      errors can only be handled by IO completion processing. That is,
      once we have marked a page for writeback and unlocked it, we have to
      report errors via IO completion because we've already started the
      IO. We may not have submitted any IO, but we've changed the page
      state to indicate that it is under IO so we must now use the IO
      completion path to report errors.
      
      To do this, add an error field to xfs_submit_ioend() to pass it the
      error that occurred during the building of the ioend chain. When
      this is non-zero, mark each ioend with the error and call
      xfs_finish_ioend() directly rather than building bios. This will
      immediately push the ioends through completion processing with the
      error that has occurred.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
  26. 31 Jul 2012 (1 commit)
    • xfs: Convert to new freezing code · d9457dc0
      Authored by Jan Kara
      Generic code now blocks all writers from the standard write paths. So we add
      blocking of all writers coming from ioctls (as a bonus, we get protection of
      ioctls against a racing remount read-only) and convert xfs_file_aio_write()
      to non-racy freeze protection. We also keep freeze protection on transaction
      start to block internal filesystem writes such as removal of preallocated
      blocks.
      
      CC: Ben Myers <bpm@sgi.com>
      CC: Alex Elder <elder@kernel.org>
      CC: xfs@oss.sgi.com
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
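
      In outline, the new-style protection pairs sb_start_write()/sb_end_write()
      around the write paths and sb_start_intwrite() around internal transactions
      (a sketch under those assumptions, not the XFS call sites themselves):

          #include <linux/fs.h>

          static ssize_t example_protected_write(struct file *file,
                                                 ssize_t (*do_write)(struct file *))
          {
                  struct super_block *sb = file_inode(file)->i_sb;
                  ssize_t ret;

                  sb_start_write(sb);     /* waits if a freeze is in progress */
                  ret = do_write(file);
                  sb_end_write(sb);
                  return ret;
          }

          /* Internal filesystem writes (e.g. at transaction start) take the
           * internal level instead:
           *      sb_start_intwrite(sb); ... sb_end_intwrite(sb);
           */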
  27. 23 Jul 2012 (1 commit)
  28. 22 Jul 2012 (1 commit)
  29. 21 Jun 2012 (1 commit)
    • xfs: xfs_vm_writepage clear iomap_valid when !buffer_uptodate (REV2) · 66f93113
      Authored by Alain Renaud
      On filesystems with a block size smaller than PAGE_SIZE we currently have
      a problem with unwritten extents.  If we have a multi-block page for
      which an unwritten extent has been allocated, and only some of the
      buffers have been written to, and they are not contiguous, we can expose
      stale data from disk in the blocks between the writes after extent
      conversion.
      
      Example of a page with unwritten and real data.
      buffer  content
      0       empty  b_state = 0
      1       DATA   b_state = 0x1023 Uptodate,Dirty,Mapped,Unwritten
      2       DATA   b_state = 0x1023 Uptodate,Dirty,Mapped,Unwritten
      3       empty  b_state = 0
      4       empty  b_state = 0
      5       DATA   b_state = 0x1023 Uptodate,Dirty,Mapped,Unwritten
      6       DATA   b_state = 0x1023 Uptodate,Dirty,Mapped,Unwritten
      7       empty  b_state = 0
      
      Buffers 1, 2, 5, and 6 have been written to, leaving 0, 3, 4, and 7
      empty.  Currently buffers 1, 2, 5, and 6 are added to a single ioend,
      and when IO has completed, extent conversion creates a real extent from
      block 1 through block 6, leaving 0 and 7 unwritten.  However buffers 3
      and 4 were not written to disk, so stale data is exposed from those
      blocks on a subsequent read.
      
      Fix this by setting iomap_valid = 0 when we find a buffer that is not
      Uptodate.  This ensures that buffers 5 and 6 are not added to the same
      ioend as buffers 1 and 2.  Later these blocks will be converted into two
      separate real extents, leaving the blocks in between unwritten.
      Signed-off-by: Alain Renaud <arenaud@sgi.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
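
      Schematically, the per-buffer step in ->writepage() becomes (an illustrative
      fragment, not the full loop):

          #include <linux/buffer_head.h>

          /* A buffer that is not uptodate invalidates the current mapping, so
           * buffers on either side of the gap end up in separate ioends and the
           * gap stays unwritten. */
          static void example_buffer_step(struct buffer_head *bh, int *imap_valid)
          {
                  if (!buffer_uptodate(bh)) {
                          *imap_valid = 0;
                          return;
                  }
                  /* ... otherwise add the buffer to the ioend being built ... */
          }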
  30. 15 Jun 2012 (2 commits)
    • xfs: m_maxioffset is redundant · d2c28191
      Authored by Dave Chinner
      The m_maxioffset field in the struct xfs_mount contains the same
      value as the superblock s_maxbytes field. There is no need to carry
      two copies of this limit around, so use the VFS superblock version.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
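
      The replacement amounts to reading the limit off the VFS superblock that
      XFS already keeps in its mount (mp->m_super), e.g. (sketch):

          #include <linux/fs.h>

          /* Bound an I/O offset by the VFS limit instead of a duplicated
           * m_maxioffset copy in struct xfs_mount. */
          static bool example_offset_in_range(struct super_block *sb, loff_t offset)
          {
                  return offset <= sb->s_maxbytes;
          }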
    • xfs: xfs_vm_writepage clear iomap_valid when !buffer_uptodate (REV2) · 7d0fa3ec
      Authored by Alain Renaud
      On filesystems with a block size smaller than PAGE_SIZE we currently have
      a problem with unwritten extents.  If we have a multi-block page for
      which an unwritten extent has been allocated, and only some of the
      buffers have been written to, and they are not contiguous, we can expose
      stale data from disk in the blocks between the writes after extent
      conversion.
      
      Example of a page with unwritten and real data.
      buffer  content
      0       empty  b_state = 0
      1       DATA   b_state = 0x1023 Uptodate,Dirty,Mapped,Unwritten
      2       DATA   b_state = 0x1023 Uptodate,Dirty,Mapped,Unwritten
      3       empty  b_state = 0
      4       empty  b_state = 0
      5       DATA   b_state = 0x1023 Uptodate,Dirty,Mapped,Unwritten
      6       DATA   b_state = 0x1023 Uptodate,Dirty,Mapped,Unwritten
      7       empty  b_state = 0
      
      Buffers 1, 2, 5, and 6 have been written to, leaving 0, 3, 4, and 7
      empty.  Currently buffers 1, 2, 5, and 6 are added to a single ioend,
      and when IO has completed, extent conversion creates a real extent from
      block 1 through block 6, leaving 0 and 7 unwritten.  However buffers 3
      and 4 were not written to disk, so stale data is exposed from those
      blocks on a subsequent read.
      
      Fix this by setting iomap_valid = 0 when we find a buffer that is not
      Uptodate.  This ensures that buffers 5 and 6 are not added to the same
      ioend as buffers 1 and 2.  Later these blocks will be converted into two
      separate real extents, leaving the blocks in between unwritten.
      Signed-off-by: Alain Renaud <arenaud@sgi.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
  31. 15 May 2012 (3 commits)