1. 19 Apr 2008, 1 commit
  2. 18 Apr 2008, 5 commits
  3. 07 Feb 2008, 4 commits
  4. 06 Feb 2008, 1 commit
    • Pagecache zeroing: zero_user_segment, zero_user_segments and zero_user · eebd2aa3
      Committed by Christoph Lameter
      Simplify page cache zeroing of segments of pages through 3 functions
      
      zero_user_segments(page, start1, end1, start2, end2)
      
              Zeros two segments of the page. It takes the positions where
              the zeroing starts and ends, which avoids length calculations
              and makes the code clearer.
      
      zero_user_segment(page, start, end)
      
              Same for a single segment.
      
      zero_user(page, start, length)
      
              Length variant for the case where we know the length.
      
      We remove the zero_user_page macro. Issues:
      
      1. It's a macro. Inline functions are preferable.
      
      2. The KM_USER0 macro is only defined for HIGHMEM.
      
         Having to treat this special case everywhere makes the
         code needlessly complex. The parameter for zeroing is always
         KM_USER0 except in a single case that we open-code.
      
      Avoiding KM_USER0 means a lot of code no longer has to deal with
      the special casing for HIGHMEM. Dealing with kmap is only necessary
      for HIGHMEM configurations, and in those configurations we use
      KM_USER0 as we do for a series of other functions defined in
      highmem.h.
      
      Since KM_USER0 depends on HIGHMEM, the existing zero_user_page
      could not be an inline function. The zero_user_* functions
      introduced here can be inline because that constant is not used
      when these functions are called.
      
      Also extract the flushing of the caches to be outside of the kmap.
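
      As a rough illustration of how callers use the new helpers (not part
      of the patch; the example_* helper names and the PAGE_SIZE bounds are
      assumptions):

          #include <linux/highmem.h>
          #include <linux/mm.h>

          /* Hypothetical helper: zero everything in the page past 'valid',
           * e.g. after a short read that did not fill the whole page. */
          static void example_zero_page_tail(struct page *page, unsigned valid)
          {
                  if (valid < PAGE_SIZE)
                          zero_user_segment(page, valid, PAGE_SIZE);
          }

          /* Hypothetical helper: zero the regions outside [from, to), as a
           * filesystem might do around a partial block write. */
          static void example_zero_around(struct page *page, unsigned from,
                                          unsigned to)
          {
                  zero_user_segments(page, 0, from, to, PAGE_SIZE);
          }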
      
      [akpm@linux-foundation.org: fix nfs and ntfs build]
      [akpm@linux-foundation.org: fix ntfs build some more]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Steven French <sfrench@us.ibm.com>
      Cc: Michael Halcrow <mhalcrow@us.ibm.com>
      Cc: <linux-ext4@vger.kernel.org>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Anton Altaparmakov <aia21@cantab.net>
      Cc: Mark Fasheh <mark.fasheh@oracle.com>
      Cc: David Chinner <dgc@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      eebd2aa3
  5. 17 Oct 2007, 1 commit
  6. 16 Oct 2007, 4 commits
  7. 15 Oct 2007, 2 commits
  8. 10 Jul 2007, 1 commit
  9. 19 Jun 2007, 1 commit
  10. 09 May 2007, 1 commit
  11. 08 May 2007, 4 commits
    • [XFS] Fix race in xfs_write() b/w dmapi callout and direct I/O checks. · 71dfd5a3
      Committed by Lachlan McIlroy
      In xfs_write() the iolock is dropped and reacquired in XFS_SEND_DATA(),
      which means that the file could change from not-cached to cached and we
      need to redo the direct I/O checks. We should also redo the direct I/O
      checks when the file size changes, regardless of whether O_APPEND is
      set.
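
      A minimal sketch of that recheck pattern (the example_* helpers are
      hypothetical stand-ins, not the actual XFS code):

          #include <linux/errno.h>
          #include <linux/fs.h>

          /* Hypothetical: dmapi callout that drops/retakes the iolock. */
          int example_send_data_event(struct inode *inode, loff_t pos, size_t len);
          /* Hypothetical: flush and invalidate cached pages. */
          int example_flushinval_pages(struct address_space *mapping);

          static int example_after_callout(struct inode *inode, loff_t pos,
                                           size_t len)
          {
                  loff_t old_size = i_size_read(inode);
                  int error = example_send_data_event(inode, pos, len);

                  if (error)
                          return error;

                  /* The file may have become cached while the iolock was
                   * dropped inside the callout: redo the flush check. */
                  if (inode->i_mapping->nrpages)
                          error = example_flushinval_pages(inode->i_mapping);

                  /* If the file size changed (O_APPEND or not), the
                   * size-dependent direct I/O checks must be redone too. */
                  if (!error && i_size_read(inode) != old_size)
                          error = -EAGAIN;    /* caller loops and rechecks */

                  return error;
          }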
      
      SGI-PV: 963483
      SGI-Modid: xfs-linux-melb:xfs-kern:28440a
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
      71dfd5a3
    • [XFS] Fix to prevent the notorious 'NULL files' problem after a crash. · ba87ea69
      Committed by Lachlan McIlroy
      The problem that has been addressed is that of synchronising updates of
      the file size with writes that extend a file. Without the fix, the
      update of a file's size as a result of a write beyond eof is
      independent of when the cached data is flushed to disk. Often the file
      size update would be written to the filesystem log before the data is
      flushed to disk. When a system crashes between these two events and the
      filesystem log is replayed on mount, the file's size will be set, but
      since the contents never made it to disk the file is full of holes. If
      some of the cached data was flushed to disk then it may just be a
      section of the file at the end that has holes.
      
      There are existing fixes to help alleviate this problem, particularly in
      the case where a file has been truncated, that force cached data to be
      flushed to disk when the file is closed. If the system crashes while the
      file(s) are still open then this flushing will never occur.
      
      The fix that we have implemented is to introduce a second file size,
      called the in-memory file size, that represents the current file size
      as viewed by the user. The existing file size, called the on-disk file
      size, is the one that gets written to the filesystem log, and we only
      update it when it is safe to do so. When we write to a file beyond eof
      we only update the in-memory file size in the write operation. Later,
      when the I/O operation that flushes the cached data to disk completes,
      an I/O completion routine will update the on-disk file size. The
      on-disk file size will be updated to the maximum offset of the I/O, or
      to the value of the in-memory file size if the I/O includes eof.
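
      As a rough sketch of the idea (hypothetical types and names, not the
      actual XFS implementation):

          #include <linux/kernel.h>
          #include <linux/types.h>

          struct example_inode {
                  loff_t mem_size;     /* in-memory size: what the user sees */
                  loff_t disk_size;    /* on-disk size: what gets logged */
          };

          /* A write beyond eof only advances the in-memory size. */
          static void example_write_extends(struct example_inode *ip, loff_t new_eof)
          {
                  if (new_eof > ip->mem_size)
                          ip->mem_size = new_eof;
          }

          /* I/O completion: now that the data is on disk it is safe to move
           * the on-disk size up to the end of the I/O, capped at the
           * in-memory size when the I/O covers eof. */
          static void example_ioend_update(struct example_inode *ip, loff_t io_end)
          {
                  loff_t new_size = min_t(loff_t, io_end, ip->mem_size);

                  if (new_size > ip->disk_size)
                          ip->disk_size = new_size;
          }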
      
      SGI-PV: 958522
      SGI-Modid: xfs-linux-melb:xfs-kern:28322a
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
      ba87ea69
    • [XFS] Fix race condition in xfs_write(). · 2a329631
      Committed by Lachlan McIlroy
      This change addresses a race in xfs_write() where, for direct I/O, the
      need_i_mutex and need_flush flags are set up before the iolock is
      acquired. The conditions used to set up the flags may change between
      setting the flags and acquiring the iolock, leaving the flags with
      incorrect values. For example, if a file is not currently cached then
      need_i_mutex is set to zero, and if the file becomes cached before the
      iolock is acquired we will fail to do the flushinval before the direct
      write.
      
      The flush (and also the call to xfs_zero_eof()) needs to be done with
      the iolock held exclusive, so we need to acquire the iolock before
      checking for cached data (or whether the write begins after eof) to
      prevent this state from changing. For direct I/O I've chosen to always
      acquire the iolock in shared mode initially and, if there is a need to
      promote it, drop it and reacquire it.
      
      There are also some other tidy-ups, including removing the O_APPEND
      offset adjustment since that work is done in generic_write_checks()
      (and we don't use offset as an input parameter anywhere).
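
      Roughly, the locking follows the sketch below (the example_* lock
      helpers are hypothetical stand-ins for the iolock primitives):

          #include <linux/fs.h>

          void example_ilock_shared(struct inode *inode);
          void example_iunlock_shared(struct inode *inode);
          void example_ilock_excl(struct inode *inode);

          /* Take the iolock shared first; the cached-data check is only
           * meaningful once the lock is held. If a flush is needed, drop
           * the shared lock and retake it exclusive, then act on the
           * re-checked state under the exclusive lock. */
          static int example_direct_write_lock(struct inode *inode)
          {
                  int excl = 0;

                  example_ilock_shared(inode);

                  if (inode->i_mapping->nrpages) {
                          example_iunlock_shared(inode);
                          example_ilock_excl(inode);
                          excl = 1;
                          /* flush/invalidate cached pages here */
                  }

                  return excl;    /* tell the caller which mode it holds */
          }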
      
      SGI-PV: 962170
      SGI-Modid: xfs-linux-melb:xfs-kern:28319a
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
      2a329631
    • [XFS] propagate return codes from flush routines · d3cf2094
      Committed by Lachlan McIlroy
      This patch handles error return values in fs_flush_pages and
      fs_flushinval_pages. It changes the prototype of fs_flushinval_pages so
      we can propagate the errors and handle them at higher layers. I also
      modified xfs_itruncate_start so that it could propagate the error
      further.
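
      In outline, the change is of this shape (a simplified sketch using
      generic page-cache calls, not the real XFS prototypes):

          #include <linux/fs.h>
          #include <linux/mm.h>

          /* Previously a void function; now the writeback error is returned
           * so callers such as the truncate path can act on it. */
          static int example_flushinval_pages(struct address_space *mapping,
                                              loff_t first)
          {
                  int error = filemap_write_and_wait(mapping);

                  if (!error)
                          truncate_inode_pages(mapping, first);
                  return error;
          }

          /* Caller propagates the error instead of ignoring it. */
          static int example_itruncate_start(struct address_space *mapping,
                                             loff_t new_size)
          {
                  return example_flushinval_pages(mapping, new_size);
          }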
      
      SGI-PV: 961990
      SGI-Modid: xfs-linux-melb:xfs-kern:28231a
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      Signed-off-by: Stewart Smith <stewart@flamingspork.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
      d3cf2094
  12. 10 Feb 2007, 3 commits
  13. 09 Dec 2006, 1 commit
  14. 01 Oct 2006, 1 commit
  15. 28 Sep 2006, 1 commit
  16. 07 Sep 2006, 2 commits
  17. 20 Jun 2006, 1 commit
  18. 19 Jun 2006, 1 commit
  19. 09 Jun 2006, 5 commits