1. 19 September 2006, 3 commits
    • [GFS2] Export lm_interface to kernel headers · 7d308590
      Fabio Massimo Di Nitto authored
      
      lm_interface.h has a few out-of-tree clients, such as GFS1
      and userland tools.
      
      Right now, these clients keep a copy of the file in their build
      trees, which can go out of sync.
      
      Move lm_interface.h to include/linux, export it to userland and
      clean up fs/gfs2 to use the new location.
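
      As a purely illustrative aside (not taken from the patch itself): once the
      header lives in include/linux and is exported through the kernel's
      headers_install machinery, in-tree GFS2 code and out-of-tree clients can
      share a single copy instead of each carrying their own:

        /*
         * Hypothetical sketch of the include-path change implied by the move;
         * previously each client carried a private copy of this header.
         */
        #include <linux/lm_interface.h>
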
      Signed-off-by: Fabio M. Di Nitto <fabbione@ubuntu.com>
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      7d308590
    • [GFS2] inode-diet-eliminate-i_blksize-and-use-a-per-superblock-default-vs-gfs2 · f3b30912
      akpm@osdl.org authored
      i_blksize got removed in -mm.
      
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      f3b30912
    • [GFS2] Map multiple blocks at once where possible · 7a6bbacb
      Steven Whitehouse authored
      This is a tidy-up of the GFS2 bmap code. The main change is that the
      bh is now passed to gfs2_block_map, allowing the flags to be set directly
      rather than repeating that code several times in ops_address.c.
      
      At the same time, the extent-mapping code from gfs2_extent_map has
      been moved into gfs2_block_map. This allows all calls to gfs2_block_map
      to map extents whenever no allocation is taking place. As a result,
      reads and non-allocating writes should be faster. A quick test
      with postmark appears to support this.
      
      There is a limit on the number of blocks mapped in a single bmap
      call in that it will only ever map blocks which are pointed to
      from a single pointer block. In other words, it will never try
      to do additional I/O in order to satisfy read-ahead. The maximum
      number of blocks is thus somewhat less than 512 (the GFS2 4k block
      size, minus the header, divided by sizeof(u64)). I've further limited
      the mapping of "normal" blocks to 32 blocks (to avoid extra work),
      since readpages() will currently read a maximum of 32 blocks ahead (128k).
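
      For concreteness, a rough back-of-the-envelope version of that limit
      (illustrative macro names; the 24-byte metadata header size is an
      assumption, so the exact figure may differ slightly):

        #include <linux/types.h>

        /* Illustrative arithmetic only; not taken from the patch. */
        #define EX_BLOCK_SIZE    4096u                /* GFS2 4k block            */
        #define EX_HEADER_SIZE   24u                  /* assumed metadata header  */
        #define EX_PTRS_PER_BLK  ((EX_BLOCK_SIZE - EX_HEADER_SIZE) / sizeof(u64))
                                                      /* (4096 - 24) / 8 = 509    */
        #define EX_READAHEAD_MAX 32u                  /* 32 x 4k = 128k window    */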
      
      Some further work will probably be needed to set a suitable value
      for DIO as well, but for now that's left at the maximum of 512 (see
      ops_address.c:gfs2_get_block_direct).
      
      There is probably a lot more that can be done to improve bmap for GFS2,
      but this is a good first step.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      7a6bbacb
  2. 07 September 2006, 1 commit
  3. 05 September 2006, 3 commits
  4. 04 September 2006, 1 commit
  5. 01 September 2006, 1 commit
    • [GFS2] Update copyright, tidy up incore.h · e9fc2aa0
      Steven Whitehouse authored
      As per comments from Jan Engelhardt <jengelh@linux01.gwdg.de> this
      updates the copyright message to say "version" in full rather than
      "v.2". Also incore.h has been updated to remove forward structure
      declarations which are not required.
      
      The gfs2_quota_lvb structure has now had endianness annotations added
      to it. Also, quota.c has been updated so that we now store the
      lvb data locally in endian-independent format, to avoid needing
      a structure in host endianness too. As a result, the endianness
      conversions are done as required at various points, and thus the
      conversion routines in lvb.[ch] are no longer required. I've
      moved the one remaining constant from lvb.h that's used into lm.h
      and removed the now unused lvb.[ch].
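
      A minimal sketch of the idea, keeping the LVB in its on-disk byte order
      and converting only at the point of use (the field names below are
      illustrative, not the exact gfs2_quota_lvb layout):

        #include <linux/types.h>
        #include <asm/byteorder.h>

        /* __be annotations let sparse check the endianness handling. */
        struct example_quota_lvb {
                __be32 qb_magic;
                __be32 qb_pad;
                __be64 qb_limit;
                __be64 qb_value;
        };

        /* Convert only where the value is actually consumed. */
        static inline u64 example_quota_value(const struct example_quota_lvb *lvb)
        {
                return be64_to_cpu(lvb->qb_value);
        }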
      
      I have not changed the HIF_ constants. That is left to a later patch
      which I hope will unify the gh_flags and gh_iflags fields of the
      struct gfs2_holder.
      
      Cc: Jan Engelhardt <jengelh@linux01.gwdg.de>
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      e9fc2aa0
  6. 26 July 2006, 2 commits
  7. 15 June 2006, 1 commit
    • [GFS2] Fix unlinked file handling · feaa7bba
      Steven Whitehouse authored
      This patch fixes the way we have been dealing with unlinked,
      but still open, files. It removes all limits (other than memory
      for inodes, as per every other filesystem) on the number of these
      which we can support on GFS2. It also means that (like other
      filesystems) it's the responsibility of the last process to close
      the file to deallocate the storage, rather than the one which did
      the unlinking. Note that with GFS2, those two events might take
      place on different nodes.
      
      Also there are a number of other changes:
      
       o We use the Linux inode subsystem as it was intended to be used,
         with respect to allocating GFS2 inodes.
       o The Linux inode cache is now the point which we use for local
         enforcement of only holding one copy of the inode in core at
         once (previously we used the glock layer for this).
       o We no longer use the unlinked "special" file. We just ignore it
         completely. This makes unlinking more efficient.
       o We now use the 4th block allocation state. The previously unused
         state is used to track unlinked but still open inodes (see the
         sketch after this list).
       o gfs2_inoded is no longer needed.
       o Several fields are now no longer needed (and have been removed)
         from the in-core struct gfs2_inode.
       o Several fields are no longer needed (and have been removed) from
         the in-core superblock.
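
      A minimal sketch of those four two-bit block states, with values along
      the lines of the GFS2 on-disk definitions (treat the exact names and
      values here as illustrative):

        /* Two bits per block in the resource-group bitmaps. */
        #define EX_BLKST_FREE      0  /* unallocated block                   */
        #define EX_BLKST_USED      1  /* ordinary allocated (data) block     */
        #define EX_BLKST_UNLINKED  2  /* dinode unlinked but still held open */
        #define EX_BLKST_DINODE    3  /* allocated dinode                    */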
      
      There are a number of future possible optimisations and clean ups
      which have been made possible by this patch.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      feaa7bba
  8. 19 May 2006, 2 commits
  9. 13 May 2006, 1 commit
    • [GFS2] Reverse block order in build_height · e90c01e1
      Steven Whitehouse authored
      The original code ordered the blocks allocated in the build_height
      routine backwards, causing excessive disk seeks when reading the
      metadata. This patch reverses the order to try to reduce disk seeks.
      
      Example: A five level metadata tree, I = Inode, P = Pointers, D = Data
      
      You need to read the blocks in the order:
      
      I P5 P4 P3 P2 P1 D
      
      in order to read a single data block. The new code now orders the blocks
      in this way. The old code used to order them as:
      
      I P1 P2 P3 P4 P5 D
      
      requiring two extra seeks on average. Note that for files which are
      grown by gradual extension, rather than by truncate or by llseek/write
      at a large offset, this doesn't apply. When writing to a file
      linearly, this routine will only be called upon to extend the
      height of the tree by one block at a time, so the ordering is
      determined by when it's called rather than by the internals of the
      routine itself. Optimising that part of the ordering is a much
      harder problem.
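
      A purely illustrative sketch of the allocation order (not the real
      build_height; the allocator callback is assumed): if the allocator tends
      to return increasing block numbers, handing out the top-of-tree pointer
      block first lays the metadata out in the order the read path walks it:

        #include <linux/types.h>

        /*
         * new_blocks[height - 1] is the pointer block closest to the inode
         * (P5 in the example above); new_blocks[0] points at the data.
         * Allocating top-down gives P5 the lowest block number, so the read
         * path I -> P5 -> ... -> P1 -> D moves forward on disk.
         */
        static void example_build_height(u64 *new_blocks, int height,
                                         u64 (*alloc_block)(void))
        {
                int h;

                for (h = height - 1; h >= 0; h--)
                        new_blocks[h] = alloc_block();
        }
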
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      e90c01e1
  10. 06 May 2006, 1 commit
    • [GFS2] Readpages support · fd88de56
      Steven Whitehouse authored
      This adds readpages support (and also corrects a small bug in
      the readpage error path at the same time). Hopefully this will
      improve performance by allowing GFS2 to submit larger chunks of
      I/O at a time.
      
      In order to simplify the setting of BH_Boundary, it currently gets
      set when we hit the end of an indirect pointer block. There is
      always a boundary at this point with the current allocation code.
      This doesn't catch all the boundaries, though, so there is still
      room for improvement.
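
      A minimal sketch of the BH_Boundary idea in a get_block-style helper
      (example_block_map and the boundary test are assumptions, not the real
      GFS2 code): flagging the buffer tells the generic read path to submit
      the I/O gathered so far instead of waiting for the next, probably
      non-contiguous, block.

        #include <linux/types.h>
        #include <linux/fs.h>
        #include <linux/buffer_head.h>

        /* Assumed helper: maps a logical block and reports whether it is
         * the last entry of an indirect pointer block. */
        int example_block_map(struct inode *inode, sector_t lblock,
                              u64 *dblock, int *boundary);

        static int example_get_block(struct inode *inode, sector_t lblock,
                                     struct buffer_head *bh_result, int create)
        {
                u64 dblock = 0;
                int boundary = 0;
                int error;

                error = example_block_map(inode, lblock, &dblock, &boundary);
                if (error)
                        return error;

                if (dblock) {
                        map_bh(bh_result, inode->i_sb, dblock);
                        if (boundary)
                                set_buffer_boundary(bh_result);
                }
                return 0;
        }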
      
      See comments in fs/gfs2/ops_address.c for further information about
      readpages with GFS2.
      
      Signed-off-by: Steven Whitehouse
      fd88de56
  11. 28 April 2006, 2 commits
  12. 24 April 2006, 1 commit
  13. 29 March 2006, 1 commit
    • [GFS2] Further updates to dir and logging code · 71b86f56
      Steven Whitehouse authored
      This reduces the size of the directory code by about 3k and gets
      readdir() to use the functions which were introduced in the previous
      directory code update.
      
      Two memory allocations are merged into one. It also eliminates the
      zeroing of some buffers which were never used before being
      initialised with other data.
      
      There is still scope for further improvement in the directory code.
      
      On the logging side, a hand-rolled mutex has been replaced by a
      standard Linux mutex in the log allocation code.
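
      As a rough illustration of that last change (names assumed; this is not
      the actual GFS2 log code), the standard mutex API replaces whatever lock
      had been built by hand:

        #include <linux/mutex.h>

        /* One mutex serialising journal-space reservation (illustrative). */
        static DEFINE_MUTEX(example_log_alloc_mutex);

        static void example_log_reserve(unsigned int blks)
        {
                mutex_lock(&example_log_alloc_mutex);
                /* ... carve 'blks' blocks out of the journal ... */
                mutex_unlock(&example_log_alloc_mutex);
        }
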
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      71b86f56
  14. 28 February 2006, 2 commits
  15. 08 February 2006, 1 commit
    • [GFS2] Make journaled data files identical to normal files on disk · 18ec7d5c
      Steven Whitehouse authored
      This is a very large patch, with a few still-to-be-resolved issues,
      so you might want to check out the previous head of the tree, since
      this is known to be unstable. Fixes for the various bugs will be
      forthcoming shortly.
      
      This patch removes the special data format which has been used
      up till now for journaled data files. Directories still retain the
      old format, so that they will remain on-disk compatible with earlier
      releases. As a result, you can now do the following with journaled
      data files:
      
       1) mmap them
       2) export them over NFS
       3) convert to/from normal files whenever you want to (the zero length
          restriction is gone)
      
      In addition, the level at which GFS2's locking is done has changed for
      all files (since they all now use the page cache), such that the locking
      is done at the page cache level rather than at the level of the fs
      operations. This should mean that things like loopback mounts and other
      things which touch the page cache directly should now work.
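
      A conceptual sketch only, not the real GFS2 address-space code (the
      cluster-lock helpers are invented for illustration): taking the lock
      inside readpage means every path that goes through the page cache,
      including mmap, loop devices and NFS export, gets the same locking
      for free.

        #include <linux/fs.h>
        #include <linux/pagemap.h>
        #include <linux/buffer_head.h>

        /* Assumed helpers standing in for the real cluster-lock calls. */
        int example_cluster_lock_shared(struct inode *inode);
        void example_cluster_unlock(struct inode *inode);
        int example_get_block(struct inode *inode, sector_t lblock,
                              struct buffer_head *bh_result, int create);

        static int example_readpage(struct file *file, struct page *page)
        {
                struct inode *inode = page->mapping->host;
                int error;

                error = example_cluster_lock_shared(inode);
                if (error) {
                        unlock_page(page);
                        return error;
                }
                error = block_read_full_page(page, example_get_block);
                example_cluster_unlock(inode);
                return error;
        }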
      
      Current known issues:
      
       1. There is a lock mode inversion problem related to the resource
          group hold function which needs to be resolved.
       2. Any significant amount of I/O causes an oops with an offset of hex 320
          (NULL pointer dereference) which appears to be related to a journaled data
          buffer appearing on a list where it shouldn't be.
       3. Direct I/O writes are disabled for the time being (will reappear later)
       4. There is probably a deadlock between the page lock and GFS' locks under
          certain combinations of mmap and fs operation I/O.
       5. An issue relating to ref counting on internally used inodes causes
          a hang on umount (discovered before this patch, and not fixed by it).
       6. One part of the directory metadata is different from GFS1 and will
          need to be resolved before the next release.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      18ec7d5c
  16. 31 January 2006, 1 commit
  17. 24 January 2006, 1 commit
  18. 18 January 2006, 2 commits
  19. 17 January 2006, 1 commit