1. 12 March 2014 · 2 commits
  2. 07 March 2014 · 5 commits
  3. 06 March 2014 · 1 commit
  4. 03 March 2014 · 1 commit
    • GFS2: Clean up journal extent mapping · b50f227b
      Steven Whitehouse committed
      This patch fixes a long-standing issue in mapping the journal
      extents. Most journals consist of only a single extent, and
      although the cache took account of that by merging extents, it
      did not actually map large extents, but instead performed a
      block-by-block mapping. Since the journal was only mapped at
      mount time, this was not normally noticeable.
      
      With the updated code, it is now possible to use the same extent
      mapping system during journal recovery (which will be added in a
      later patch). This will allow checking the integrity of the
      journal before any replay of the journal content is attempted.
      For this reason the code is moving to bmap.c, since it will be
      used more widely in due course.
      
      An exercise left for the reader is to compare the new function
      gfs2_map_journal_extents() with gfs2_write_alloc_required().
      
      Additionally, should there be a failure, the error reporting is
      also updated to show more detail about what went wrong.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
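      As a rough illustration of the extent merging described above,
      here is a minimal, self-contained C sketch; struct jext and
      jext_add() are hypothetical stand-ins, not the actual GFS2
      structures or functions:

          #include <stddef.h>
          #include <stdint.h>

          struct jext {
              uint64_t lblock;  /* first logical block of the extent */
              uint64_t dblock;  /* first physical (disk) block */
              uint64_t blocks;  /* number of contiguous blocks */
          };

          /* Fold a newly mapped block into the last extent when it is
           * both logically and physically contiguous with it; otherwise
           * start a new extent. Returns the number of extents in use. */
          static size_t jext_add(struct jext *map, size_t n,
                                 uint64_t lblock, uint64_t dblock)
          {
              if (n && map[n - 1].lblock + map[n - 1].blocks == lblock &&
                  map[n - 1].dblock + map[n - 1].blocks == dblock) {
                  map[n - 1].blocks++;    /* extend the previous extent */
                  return n;
              }
              map[n].lblock = lblock;
              map[n].dblock = dblock;
              map[n].blocks = 1;
              return n + 1;
          }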
  5. 27 February 2014 · 1 commit
  6. 25 February 2014 · 4 commits
  7. 21 February 2014 · 1 commit
  8. 17 February 2014 · 1 commit
  9. 10 February 2014 · 1 commit
  10. 07 February 2014 · 1 commit
    • GFS2: Add meta readahead field in directory entries · 44aaada9
      Steven Whitehouse committed
      The intent of this new field in the directory entry is to let a
      subsequent lookup know how many blocks, contiguous with the
      inode, contain metadata relating to that inode. This then allows
      a single read to fetch both the inode and its metadata, rather
      than reading the inode first and issuing a second read for the
      metadata.
      
      This only works under some fairly strict conditions: since we do
      not have back pointers from inodes to directory entries, we must
      ensure that the blocks referenced in this way will always belong
      to the inode.
      
      This rules out being able to use this system for indirect
      blocks, as these can change as a result of truncate/rewrite.
      
      So the idea here is to restrict this to xattr blocks only for
      the time being. For most inodes, that means only a single block.
      Also, when ACLs and/or SELinux or other LSMs are in use, their
      xattrs will be added at inode creation time, so that they will
      be contiguous with the inode on disk and will almost always be
      needed when we read the inode in for permission checks.
      
      Once an xattr block for an inode is allocated, it will never
      change until the inode is deallocated.
      
      This patch adds the new field, a further patch will add the
      readahead in due course.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
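      To make the layout concrete, here is a hedged C sketch of a
      directory entry carrying the new readahead count; the field
      names and widths are illustrative, not the exact GFS2 on-disk
      format:

          #include <stdint.h>

          struct dirent_sketch {
              uint64_t inum;      /* inode number */
              uint32_t hash;      /* name hash */
              uint16_t rec_len;   /* record length */
              uint16_t name_len;  /* file name length */
              uint16_t type;      /* entry type */
              uint16_t rahead;    /* new: metadata blocks contiguous
                                   * with the inode (e.g. the xattr
                                   * block) worth reading ahead */
              /* the name follows the fixed-size header */
          };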
  11. 06 February 2014 · 2 commits
    • GFS2: Lock i_mutex and use a local gfs2_holder for fallocate · a0846a53
      Bob Peterson committed
      This patch causes GFS2 to lock the i_mutex during fallocate. It
      also switches from using a dinode's inode glock to using a local
      holder, like the other GFS2 i_operations.
      Signed-off-by: Bob Peterson <rpeterso@redhat.com>
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
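      The pattern is roughly the following kernel-style C sketch,
      written against the 3.14-era GFS2/VFS interfaces; it illustrates
      the locking order only, with simplified error handling, and is
      not the actual function body:

          static long fallocate_sketch(struct file *file, int mode,
                                       loff_t offset, loff_t len)
          {
              struct inode *inode = file_inode(file);
              struct gfs2_inode *ip = GFS2_I(inode);
              struct gfs2_holder gh;        /* local, on-stack holder */
              int error;

              mutex_lock(&inode->i_mutex);  /* serialise with writers */

              gfs2_holder_init(ip->i_gl, LM_ST_EXCLUSIVE, 0, &gh);
              error = gfs2_glock_nq(&gh);
              if (error)
                  goto out;

              /* ... the actual preallocation work goes here ... */

              gfs2_glock_dq(&gh);
          out:
              gfs2_holder_uninit(&gh);
              mutex_unlock(&inode->i_mutex);
              return error;
          }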
    • GFS2: journal data writepages update · 774016b2
      Steven Whitehouse committed
      GFS2 has carried what is more or less a copy of
      write_cache_pages() for some time. That copy has slipped behind
      the core code over time. This patch brings it back up to date
      and, in addition, adds the tracepoint which would otherwise be
      missing.
      
      We could go further and eliminate some or all of the code
      duplication here. The issue is that if we did, the function we
      would need to split out from the existing write_cache_pages(),
      which would look a lot like gfs2_jdata_write_pagevec(), would
      end up putting quite a lot of extra variables on the stack. That
      has been a problem in the past in the writeback code path, which
      is why I've hesitated to do it here.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
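      For context, the overall shape of such a writepages loop is
      sketched below in kernel-style C against the 3.14-era pagevec
      interfaces; write_one_page_sketch() is a hypothetical stand-in
      for the per-page locking, revalidation, and write-out:

          static int jdata_writepages_sketch(struct address_space *mapping,
                                             struct writeback_control *wbc)
          {
              struct pagevec pvec;
              pgoff_t index = wbc->range_start >> PAGE_CACHE_SHIFT;
              pgoff_t end = wbc->range_end >> PAGE_CACHE_SHIFT;
              int i, nr, ret = 0;

              pagevec_init(&pvec, 0);
              while (!ret && index <= end) {
                  nr = pagevec_lookup_tag(&pvec, mapping, &index,
                                          PAGECACHE_TAG_DIRTY,
                                          PAGEVEC_SIZE);
                  if (!nr)
                      break;
                  /* This per-pagevec body is roughly what would be
                   * shared with write_cache_pages() if it were ever
                   * split out. */
                  for (i = 0; i < nr && !ret; i++)
                      ret = write_one_page_sketch(pvec.pages[i]);
                  pagevec_release(&pvec);
              }
              return ret;
          }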
  12. 04 February 2014 · 1 commit
    • GFS2: Allocate block for xattr at inode alloc time, if required · b2c8b3ea
      Steven Whitehouse committed
      This is another step towards improving the allocation of xattr
      blocks at inode allocation time. Here we take advantage of
      Christoph's recent work on ACLs to allocate a block for the
      xattrs early if we know that we will be adding ACLs to the inode
      later on. The advantage is that it becomes much more likely that
      we'll get a contiguous run of two blocks, where the first is the
      inode and the second is the xattr block.
      
      We still have to fall back to the original system in case we
      don't get the requested two contiguous blocks, or in case the
      ACLs are too large to fit into the block.
      
      Future patches will move more of the ACL setting code further up
      the gfs2_inode_create() function. I'd also like to be able to do
      the same thing with the xattrs from LSMs in due course. That way
      we should be able to slowly reduce the number of independent
      transactions, at least in the most common cases.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
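      As a hedged illustration of the strategy, here is a
      self-contained C sketch; alloc_blocks() and the structure are
      hypothetical stand-ins for the real allocator:

          #include <stdbool.h>
          #include <stdint.h>

          /* Hypothetical allocator: tries to hand out 'requested'
           * contiguous blocks; returns how many it actually found and
           * the first block number via *first. */
          extern unsigned alloc_blocks(unsigned requested, uint64_t *first);

          struct inode_alloc {
              uint64_t inode_block;
              uint64_t xattr_block;  /* 0 if not allocated up front */
          };

          /* Allocate the inode block, plus an adjacent xattr block when
           * we already know ACLs or LSM xattrs will be added. */
          static bool alloc_inode_with_xattr(bool want_xattr,
                                             struct inode_alloc *out)
          {
              uint64_t first;
              unsigned got = alloc_blocks(want_xattr ? 2 : 1, &first);

              if (got == 0)
                  return false;  /* no space at all */
              out->inode_block = first;
              /* Only a second, physically adjacent block can serve as
               * the early xattr block; otherwise fall back to the
               * original, separate-transaction path later. */
              out->xattr_block = (want_xattr && got >= 2) ? first + 1 : 0;
              return true;
          }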
  13. 03 February 2014 · 3 commits
    • GFS2: Plug on AIL flush · 885bceca
      Steven Whitehouse committed
      When we flush the AIL list, we write out what is likely to be a
      lot of small I/Os, possibly in an order which is not ideal for
      performance. Since this is done by calling filemap_fdatawrite
      for each individual inode's address space, there is no overall
      plugging going on.
      
      In addition, we do not always wait for AIL I/O when we flush it,
      so it is possible for things to be left behind on the queue. By
      adding explicit plugging here, as in the sketch below, we reduce
      the chances of this being an issue. A quick test using the AIL
      flush tracepoint shows a small but measurable improvement.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
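      The pattern amounts to wrapping the per-inode writeback calls in
      a block plug, as in this hedged kernel-style sketch (the real
      AIL walk and its locking are omitted, and the i_ail list linkage
      is hypothetical):

          #include <linux/blkdev.h>
          #include <linux/fs.h>

          static void ail_flush_sketch(struct list_head *ail_list)
          {
              struct blk_plug plug;
              struct gfs2_inode *ip;

              blk_start_plug(&plug);
              list_for_each_entry(ip, ail_list, i_ail)
                  filemap_fdatawrite(ip->i_inode.i_mapping);
              blk_finish_plug(&plug);  /* submit the queued I/O in one go */
          }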
    • hpfs: optimize quad buffer loading · 1c0b8a7a
      Mikulas Patocka committed
      HPFS needs to load 4 consecutive 512-byte sectors when accessing
      the directory nodes or bitmaps. We can't switch to a 2048-byte
      block size because files are allocated in units of 512-byte
      sectors.
      
      Previously, the driver would allocate a 2048-byte area using
      kmalloc, copy the data from the four buffers into this area, and
      eventually copy it back if it was modified.
      
      In the current implementation of the buffer cache, buffers are allocated
      in the pagecache.  That means that 4 consecutive 512-byte buffers are
      stored in consecutive areas in the kernel address space.  So, we don't
      need to allocate extra memory and copy the content of the buffers there.
      
      This patch optimizes the code to avoid copying the buffers. It
      checks whether the four buffers are stored in contiguous memory;
      if they are not, it falls back to allocating a 2048-byte area
      and copying the data there (the check is sketched below).
      Signed-off-by: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
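      The contiguity check itself is simple; here is a hedged C sketch
      (quad_data_sketch() is illustrative, and the kmalloc-and-copy
      fallback is left to the caller):

          #include <linux/buffer_head.h>

          /* Return a direct pointer to a 2048-byte view if the four
           * 512-byte buffers are adjacent in memory, or NULL if the
           * caller must fall back to allocating and copying. */
          static void *quad_data_sketch(struct buffer_head *bh[4])
          {
              int i;

              for (i = 1; i < 4; i++)
                  if (bh[i]->b_data != bh[0]->b_data + i * 512)
                      return NULL;
              return bh[0]->b_data;
          }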
    • hpfs: remember free space · 2cbe5c76
      Mikulas Patocka committed
      Previously, hpfs scanned all bitmaps each time the user asked
      for free space using statfs. This patch changes it so that hpfs
      scans the bitmaps only once, remembers the free space, and on
      the next invocation of statfs returns the value instantly.
      
      New versions of wine hammer the statfs syscall very heavily,
      making some games unplayable when they're stored on hpfs, with
      load times measured in minutes.
      
      This should be backported to the stable kernels because it fixes
      a user-visible problem (excessive level load times in wine).
      Signed-off-by: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
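      The scheme boils down to lazy initialisation of a cached
      counter; here is a minimal C sketch with hypothetical names
      (note that the cached count must also be adjusted on every
      allocation and free to stay accurate):

          #include <stdint.h>

          struct fs_info {
              int64_t n_free;  /* cached free-sector count; -1 = not yet known */
          };

          /* Hypothetical slow path: a full scan of all bitmaps. */
          extern int64_t scan_all_bitmaps(struct fs_info *fs);

          /* statfs helper: only the first call pays for the bitmap scan. */
          static int64_t cached_free_space(struct fs_info *fs)
          {
              if (fs->n_free < 0)
                  fs->n_free = scan_all_bitmaps(fs);
              return fs->n_free;
          }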
  14. 02 February 2014 · 1 commit
  15. 01 February 2014 · 5 commits
  16. 31 January 2014 · 5 commits
  17. 30 January 2014 · 5 commits